Similar resources
Exponentiated Gradient Versus Gradient Descent for Linear Predictors
We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gra...
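The two updates contrasted in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: it assumes squared loss, a hypothetical learning rate `eta`, and (for EG) weights kept on the probability simplex via renormalization.

```python
import numpy as np

def gd_update(w, x, y, eta=0.1):
    """Gradient descent: subtract the gradient of the squared error,
    which for a linear predictor is (y_hat - y) * x."""
    y_hat = w @ x
    return w - eta * (y_hat - y) * x

def eg_update(w, x, y, eta=0.1):
    """Exponentiated gradient: multiply each weight by the exponential of the
    (negated, scaled) gradient component, then renormalize to the simplex."""
    y_hat = w @ x
    w_new = w * np.exp(-eta * (y_hat - y) * x)
    return w_new / w_new.sum()

# One on-line round on a toy example.
w = np.ones(3) / 3                  # uniform start on the simplex (assumed)
x = np.array([1.0, 0.5, 0.0])
print(gd_update(w, x, y=1.0))       # additive update
print(eg_update(w, x, y=1.0))       # multiplicative update, still sums to 1
```

The key contrast is additive versus multiplicative: GD moves the weight vector in the direction of the negative gradient, while EG scales each component exponentially in its gradient coordinate.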
Gradient-enhanced surrogate modeling based on proper orthogonal decomposition
A new method for enhanced surrogate modeling of complex systems by exploiting gradient information is presented. The technique combines the proper orthogonal decomposition (POD) and interpolation methods capable of fitting both sampled input values and sampled derivative information like Kriging (aka spatial Gaussian processes). In contrast to existing POD-based interpolation approaches, the gr...
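As background for the POD part of this approach, a plain POD projection can be sketched as below. This shows only the decomposition-and-projection step, not the gradient-enhanced Kriging interpolation the abstract describes, and the snapshot data is synthetic:

```python
import numpy as np

# Snapshot matrix: each column is one sampled system response (synthetic here).
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((100, 8))

# POD basis via thin SVD of the snapshot matrix; keep the r leading modes.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3
basis = U[:, :r]                    # orthonormal POD modes

# A state is approximated by its orthogonal projection onto the POD subspace.
state = snapshots[:, 0]
coeffs = basis.T @ state            # reduced coordinates
reconstruction = basis @ coeffs     # rank-r approximation of the state
```

A gradient-enhanced surrogate would additionally fit the reduced coordinates `coeffs` as a function of the input parameters, using both sampled values and sampled derivatives.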
Quantized Stochastic Gradient Descent: Communication versus Convergence
Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to excellent scalability properties of this algorithm, and to its efficiency in the context of training deep neural networks. A fundamental barrier for parallelizing large-scale SGD is the fact that the cost of communicating the gradient updates between nodes can be very la...
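One standard way to cut that communication cost is stochastic gradient quantization. The sketch below is a simplified, QSGD-style single-level quantizer (an assumption for illustration, not necessarily the scheme analyzed in the paper): each coordinate is rounded to ±‖g‖ or 0 with probabilities chosen so the quantized vector is an unbiased estimate of the gradient.

```python
import numpy as np

def quantize(g, rng):
    """Stochastically quantize gradient g: coordinate i becomes
    sign(g_i) * ||g|| with probability |g_i| / ||g||, else 0.
    E[quantize(g)] = g, so SGD convergence guarantees carry over in expectation."""
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return np.zeros_like(g)
    keep = rng.random(g.shape) < np.abs(g) / norm
    return norm * np.sign(g) * keep

rng = np.random.default_rng(0)
g = np.array([0.3, -0.4, 0.5])
print(quantize(g, rng))   # sparse vector with entries in {-||g||, 0, +||g||}
```

Each node then transmits only the scalar norm plus a sign and a sparsity pattern per coordinate, instead of full-precision floats; the trade-off is higher variance per step, which is the "communication versus convergence" tension in the title.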
Gradient Data and Gradient Grammars
1 Introduction: Throughout the history of generative grammar there has been a tension between the categorical nature of the theories proposed and the gradient...
Journal
Journal title: Homology, Homotopy and Applications
Year: 2021
ISSN: ['1532-0073', '1532-0081']
DOI: https://doi.org/10.4310/hha.2021.v23.n1.a5